10 research outputs found

    Representation of synchronous, asynchronous, and polychronous components by clocked guarded actions

    Get PDF
    For the design of embedded systems, many languages are in use that are based on different models of computation, such as event-, data-, and clock-driven paradigms as well as paradigms without a clear notion of time. Systems composed of such heterogeneous components are hard to analyze, so that mainly co-simulation by coupling different simulators has been considered so far. In this article, we propose clocked guarded actions as a unique intermediate representation that can be used as a common basis for simulation, analysis, and synthesis. We show how synchronous, (untimed) asynchronous, and polychronous languages can be translated to clocked guarded actions, demonstrating that our intermediate representation is powerful enough to capture rather different models of computation. Having a unique and composable intermediate representation of these components at hand allows a simple composition of these components. Moreover, we show how clocked guarded actions can be used for verification by symbolic model checking and for simulation by SystemC.
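
    The abstract does not give a concrete syntax for clocked guarded actions, so the following C++ sketch only illustrates the general idea: each action pairs a trigger (a clock and a guard over the current state) with an assignment that takes effect in the next step. The names and the two-phase update scheme are assumptions for illustration, not the representation used in the article.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // One clocked guarded action: when its clock ticks and its guard holds
        // on the current state, the action computes a value for a target variable.
        struct GuardedAction {
            std::string clock;                                            // clock on which the action may fire
            std::function<bool(const std::map<std::string,int>&)> guard;  // condition over the current state
            std::string target;                                           // variable written by the action
            std::function<int(const std::map<std::string,int>&)> value;   // value computed from the current state
        };

        // Execute one synchronous step: evaluate all guards on the current state,
        // then apply every enabled action to a fresh "next" state (two-phase update).
        std::map<std::string,int> step(const std::vector<GuardedAction>& actions,
                                       const std::map<std::string,int>& state,
                                       const std::string& tickingClock) {
            std::map<std::string,int> next = state;  // unwritten variables keep their value
            for (const auto& a : actions)
                if (a.clock == tickingClock && a.guard(state))
                    next[a.target] = a.value(state);
            return next;
        }

        int main() {
            // Toy system: a counter on clock c that wraps at 4.
            std::vector<GuardedAction> actions = {
                {"c", [](const auto& s){ return s.at("cnt") <  3; }, "cnt", [](const auto& s){ return s.at("cnt") + 1; }},
                {"c", [](const auto& s){ return s.at("cnt") >= 3; }, "cnt", [](const auto&  ){ return 0; }},
            };
            std::map<std::string,int> state{{"cnt", 0}};
            for (int i = 0; i < 6; ++i) {
                state = step(actions, state, "c");
                std::cout << "cnt = " << state["cnt"] << "\n";
            }
        }

    Real compilers additionally need causality and scheduling analysis so that actions within a step can read values written in the same step; the two-phase update above simply sidesteps that issue.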

    Clock Refinement in Imperative Synchronous Languages

    Get PDF
    A huge number of computational models and programming languages has been proposed for the description of embedded systems. In contrast to traditional sequential programming languages, they cope directly with the requirements of embedded systems: direct support for concurrent computations and periodic interaction with the environment are only some of the features they offer. Synchronous languages are one class of languages for the development of embedded systems, and they follow the fundamental principle that execution is divided into a sequence of logical steps. Each step is based on the simplification that the computation of the outputs is finished as soon as the inputs are available. This rigorous abstraction leads to well-defined deterministic parallel composition in general, and to deterministic abortion and suspension in imperative synchronous languages in particular. These key features also make it possible to translate programs to hardware and software, and formal verification techniques such as model checking can be applied easily. Besides these advantages, imperative synchronous languages also have some drawbacks. Over-synchronization is caused by parallel threads that have to synchronize at each execution step even if they do not communicate, since the synchronization is implicitly forced by the control flow. This thesis considers the idea of clock refinement to introduce several abstraction layers for communication and synchronization in addition to the existing single-clock abstraction. Clocks can be refined by several independent clocks so that a controlled amount of asynchrony between subsequent synchronization points can be exploited by compilers. The declarations of clocks form a tree, and clocks can be defined within the threads of the parallel statement, which allows one to perform independent computations based on these clocks without synchronizing the threads. However, the synchronous abstraction is kept at each level. Clock refinement is introduced in this thesis as an extension of the imperative synchronous language Quartz: new program statements allow one to define a new clock as a refinement of an existing one and to finish a step of a particular clock. Examples show how the new statements interact with the already existing statements, before the semantics of the extension is formally defined. Furthermore, the thesis presents a compilation algorithm that translates programs to an intermediate format and the intermediate format to a hardware description. The advantages obtained by the new modeling feature are finally evaluated based on examples.
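
    The abstract states that clock declarations form a tree and that threads running on different sub-clocks need not synchronize. The following C++ sketch models only that tree structure and the resulting synchronization relation; the type and function names are hypothetical, and this is neither Quartz syntax nor the compilation scheme of the thesis.

        #include <iostream>
        #include <string>

        // A clock in the refinement hierarchy: every clock except the root refines its parent.
        struct Clock {
            std::string name;
            const Clock* parent;  // nullptr for the base (root) clock
        };

        bool isAncestor(const Clock* anc, const Clock* c) {
            for (; c != nullptr; c = c->parent)
                if (c == anc) return true;
            return false;
        }

        // Steps of two clocks have to be aligned iff one clock refines the other
        // (directly or transitively); sibling sub-clocks belong to independent
        // refinements and impose no ordering between their steps.
        bool mustSynchronize(const Clock* a, const Clock* b) {
            return isAncestor(a, b) || isAncestor(b, a);
        }

        int main() {
            Clock base{"c0", nullptr};
            Clock left{"c1", &base};    // refinement used by the first thread
            Clock right{"c2", &base};   // refinement used by the second thread

            std::cout << std::boolalpha
                      << mustSynchronize(&base, &left)  << "\n"   // true:  c1 steps are contained in c0 steps
                      << mustSynchronize(&left, &right) << "\n";  // false: c1 and c2 steps are independent
        }

    At a step of the base clock both threads still align, which is one way to read the claim that the synchronous abstraction is preserved at every level of the clock tree.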

    Clock Refinement in Imperative Synchronous Language

    Get PDF

    Constructive Polychronous Systems

    No full text
    The synchronous paradigm provides a logical abstraction of time for reactive system design which allows automatic synthesis of embedded systems that behave in a predictable, timely, and reactive manner. According to the synchrony hypothesis, a synchronous model reacts to inputs by generating outputs that are immediately made available to the environment. While synchrony greatly simplifies the design of complex systems in general, it can sometimes lead to causal cycles. In these cases, constructiveness is a key property to guarantee that the output of each reaction can still always be algorithmically determined. Polychrony deviates from perfect synchrony by using a partially ordered, i.e., a relational, model of time. It encompasses the behaviors of (implicitly) multi-clocked data-flow networks of synchronous modules and can analyze and synthesize them as GALS systems or Kahn process networks (KPNs). In this paper, we present a unified constructive semantic framework using structural operational semantics, which encompasses both the constructive behavior of synchronous modules and the multi-clocked behavior of polychronous networks. Along the way, we define the very first executable operational semantics of the polychronous language Signal.

    Constructive Polychronous Systems

    Get PDF
    The synchronous paradigm provides a logical abstraction of time for reactive system design which allows automatic synthesis of embedded programs that behave in a predictable, timely, and reactive manner. According to the synchrony hypothesis, a synchronous model reacts to input events and generates outputs that are immediately made available. But even though synchrony greatly simplifies the design of complex systems, models are often rejected when data dependencies within a reaction are ill-specified and lead to causal cycles. Constructivity is a key property to guarantee that the output during each reaction can be algorithmically determined. Polychrony deviates from perfect synchrony by using a partially ordered or relational model of time. It captures the behaviors of (implicitly) multi-clocked data-flow networks and can analyze and synthesize them to GALS systems or to Kahn process networks (KPNs). In this paper, we provide a unified constructive semantic framework, using structural operational semantics, which captures the behavior of both synchronous modules and multi-clocked polychronous processes. Along the way, we define the very first operational semantics of Signal.
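
    To give a concrete feel for constructivity (the paper itself works with a structural operational semantics for Signal, which is not reproduced here), the following self-contained C++ sketch determines the outputs of a reaction by Kleene-style fixpoint iteration over a three-valued domain: a set of equations is constructive for a given input exactly when every signal resolves to true or false. The names and the particular equations are illustrative assumptions.

        #include <functional>
        #include <iostream>
        #include <map>
        #include <string>
        #include <vector>

        // Three-valued domain: Bot means "not yet determined".
        enum class TV { Bot, F, T };

        // Kleene (parallel) operators: they may produce a definite result
        // even if one operand is still undetermined.
        TV tnot(TV a) { return a == TV::Bot ? TV::Bot : (a == TV::T ? TV::F : TV::T); }
        TV tand(TV a, TV b) {
            if (a == TV::F || b == TV::F) return TV::F;
            if (a == TV::T && b == TV::T) return TV::T;
            return TV::Bot;
        }
        TV tor(TV a, TV b) { return tnot(tand(tnot(a), tnot(b))); }

        using Env = std::map<std::string, TV>;
        struct Eq { std::string lhs; std::function<TV(const Env&)> rhs; };

        // Iterate the equations until nothing changes; constructive iff no Bot remains.
        bool constructiveReaction(const std::vector<Eq>& eqs, Env& env) {
            bool changed = true;
            while (changed) {
                changed = false;
                for (const auto& e : eqs) {
                    TV v = e.rhs(env);
                    if (v != env[e.lhs]) { env[e.lhs] = v; changed = true; }
                }
            }
            for (const auto& e : eqs)
                if (env[e.lhs] == TV::Bot) return false;
            return true;
        }

        int main() {
            // Cyclic but constructive: x = a or y, y = a and x, with input a = false.
            Env env{{"a", TV::F}, {"x", TV::Bot}, {"y", TV::Bot}};
            std::vector<Eq> eqs{
                {"x", [](const Env& e){ return tor (e.at("a"), e.at("y")); }},
                {"y", [](const Env& e){ return tand(e.at("a"), e.at("x")); }},
            };
            std::cout << "constructive: " << std::boolalpha << constructiveReaction(eqs, env) << "\n";

            // Non-constructive cycle: z = not z stays undetermined.
            Env env2{{"z", TV::Bot}};
            std::vector<Eq> eqs2{{"z", [](const Env& e){ return tnot(e.at("z")); }}};
            std::cout << "constructive: " << constructiveReaction(eqs2, env2) << "\n";
        }

    The first system contains a causal cycle but still resolves (both signals become false), whereas the second never leaves the undetermined value and would be rejected.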

    Embedding Polychrony into Synchrony

    Get PDF
    This article presents an embedding of polychronous programs into synchronous ones. Due to this embedding, it is not only possible to deepen the understanding of these different models of computation, but, more importantly, it is possible to transfer compilation techniques that were developed for synchronous programs to polychronous programs. This transfer is nontrivial because the underlying paradigms differ more than their names suggest: since synchronous systems react deterministically to given inputs in discrete steps, they are typically used to describe reactive systems with a totally ordered notion of time. In contrast, polychronous system models entail a partially ordered notion of time, and are most suited to interface a system with an asynchronous environment by specifying input/output constraints from which a deterministic controller may eventually be refined and synthesized. As particular examples of the mentioned cross-fertilization, we show how a simulator and a verification backend for synchronous programs can be made available to polychronous specifications, which is a first step towards integrating heterogeneous models of computation.
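
    The abstract does not spell out its embedding, but a common way to simulate a polychronous data-flow process inside a single-clocked synchronous step is to encode each signal as a presence flag plus a value, so that absence becomes an ordinary value of the synchronous program. The C++ sketch below applies this standard flag encoding to the Signal operators "when" and "default"; the struct and function names are illustrative assumptions, not the construction of the article.

        #include <iostream>

        // Flag encoding of a polychronous signal inside one synchronous instant:
        // 'present' plays the role of the signal's clock, 'value' is only
        // meaningful when the signal is present.
        struct Sig {
            bool present;
            int  value;
        };

        // y := x when c : present iff x and c are present and c carries true.
        Sig whenOp(Sig x, Sig c) {
            bool p = x.present && c.present && (c.value != 0);
            return {p, p ? x.value : 0};
        }

        // z := x default y : takes x when x is present, otherwise y.
        Sig defaultOp(Sig x, Sig y) {
            if (x.present) return x;
            return y;
        }

        int main() {
            // One synchronous instant: x present, c present and true, y absent.
            Sig x{true, 42}, c{true, 1}, y{false, 0};
            Sig a = whenOp(x, c);      // present, value 42
            Sig b = defaultOp(y, a);   // y absent, so b takes a: present, value 42
            std::cout << a.present << " " << a.value << " / "
                      << b.present << " " << b.value << "\n";
        }

    Once a polychronous specification is expressed this way, an off-the-shelf synchronous simulator or model checker can process it directly, which is the kind of backend reuse the article aims at.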